This case study is based on one of the most famous datasets in Machine Learning: the German Credit Risk dataset. The goal is to predict whether a given loan would be a risk to the bank or not.
In simple terms: if the loan amount is given to the applicant, will they pay it back or become a defaulter?
Since many applications need to be processed every day, it would be helpful to have a predictive model in place that assists the executives by giving them a heads-up about the approval or rejection of a new loan application.
In the case study below I discuss the step-by-step approach to creating a Machine Learning predictive model in such scenarios. You can use this flow as a template to solve any supervised ML classification problem.
The flow of the case study is as below:
I know it's a long list!! Take a deep breath... and let us get started!
This is one of the most important steps in machine learning! You must understand the data and the domain well before trying to apply any machine learning algorithm.
The file used for this case study is "CreditRiskData.csv". This file contains the historical data of the good and bad loans issued.
The goal is to learn from this data and predict if a given loan application should be approved or rejected!
The business meaning of each column in the data is as below
More detailed information about this dataset can be found at the UCI repository: https://archive.ics.uci.edu/ml/datasets/statlog+(german+credit+data)
# Suppressing the warning messages
import warnings
warnings.filterwarnings('ignore')
# Reading the dataset
import pandas as pd
import numpy as np
CreditRiskData=pd.read_csv('/Users/farukh/Python Case Studies/CreditRiskData.csv', encoding='latin')
print('Shape before deleting duplicate values:', CreditRiskData.shape)
# Removing duplicate rows if any
CreditRiskData=CreditRiskData.drop_duplicates()
print('Shape After deleting duplicate values:', CreditRiskData.shape)
# Printing sample data
# Start observing the Quantitative/Categorical/Qualitative variables
CreditRiskData.head(10)
Based on the problem statement you can understand that we need to create a supervised ML classification model, as the target variable is categorical.
%matplotlib inline
# Creating Bar chart as the Target variable is Categorical
GroupedData=CreditRiskData.groupby('GoodCredit').size()
GroupedData.plot(kind='bar', figsize=(4,3))
The data distribution of the target variable is satisfactory to proceed further. There are sufficient rows for each category to learn from.
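To put a number on this, the class proportions can also be printed; a quick sketch using the same target column:
# Checking the proportion of good vs bad loans in the target variable
CreditRiskData['GoodCredit'].value_counts(normalize=True)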
This step is performed to understand the overall data: the volume of data and the types of columns present. An initial assessment of the data should be done to identify which columns are Quantitative, Categorical, or Qualitative.
This step also starts the column rejection process. You must look at each column carefully and ask: does this column affect the values of the Target variable? For example, in this case study you would ask: does this column affect the approval or rejection of the loan? If the answer is a clear "No", remove the column immediately from the data; otherwise keep it for further analysis.
There are four commands that are commonly used for basic data exploration in Python:
# Looking at the sample rows in the data
CreditRiskData.head()
# Observing the summarized information of data
# Data types, Missing values based on number of non-null values Vs total rows etc.
# Remove those variables from data which have too many missing values (Missing Values > 30%)
# Remove Qualitative variables which cannot be used in Machine Learning
CreditRiskData.info()
# Looking at the descriptive statistics of the data
CreditRiskData.describe(include='all')
# Finding unique values for each column
# To understand which column is categorical and which one is continuous
# Typically if the number of unique values is < 20 then the variable is likely to be categorical, otherwise continuous
CreditRiskData.nunique()
Based on the basic exploration above, you can now create a simple report of the data, noting down your observations regarding each column, thus creating an initial roadmap for further analysis.
The selected columns in this step are not final; further study will be done and then a final list will be created.
We can spot a categorical variable in the data by looking at its unique values. Typically a categorical variable contains fewer than 20 unique values AND there is repetition of values, which means the data can be grouped by those unique values.
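As a quick check, the heuristic above can be applied programmatically; a small sketch to shortlist the likely categorical columns:
# Shortlisting columns with fewer than 20 unique values as likely categorical (heuristic only)
LikelyCategoricalCols=[col for col in CreditRiskData.columns if CreditRiskData[col].nunique() < 20]
print(LikelyCategoricalCols)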
Based on the Basic Data Exploration above, we have spotted seventeen categorical predictors in the data
Categorical Predictors: 'checkingstatus', 'history', 'purpose','savings','employ', 'installment', 'status', 'others','residence', 'property', 'otherplans', 'housing', 'cards', 'job', 'liable', 'tele', 'foreign'
We use bar charts to see how the data is distributed for these categorical columns.
Since there are so many categorical predictors, we will call the function below for up to 5 of them at a time.
# Plotting multiple bar charts at once for categorical variables
# Since there is no default function which can plot bar charts for multiple columns at once,
# we are defining our own function for the same
def PlotBarCharts(inpData, colsToPlot):
    %matplotlib inline
    import matplotlib.pyplot as plt

    # Generating multiple subplots, one per column
    fig, subPlot=plt.subplots(nrows=1, ncols=len(colsToPlot), figsize=(20,5))
    fig.suptitle('Bar charts of: '+ str(colsToPlot))

    # Plotting the frequency of each category in its own subplot
    for colName, plotNumber in zip(colsToPlot, range(len(colsToPlot))):
        inpData.groupby(colName).size().plot(kind='bar', ax=subPlot[plotNumber])
#####################################################################
# Calling the function for 5 columns
PlotBarCharts(inpData=CreditRiskData,
colsToPlot=['checkingstatus', 'history', 'purpose','savings','employ'])
#####################################################################
# Calling the function for 5 columns
PlotBarCharts(inpData=CreditRiskData,
colsToPlot=['installment', 'status', 'others','residence', 'property'])
#####################################################################
# Calling the function for 4 columns
PlotBarCharts(inpData=CreditRiskData,
colsToPlot=['otherplans', 'housing', 'cards', 'job'])
#####################################################################
# Calling the function for 3 columns
PlotBarCharts(inpData=CreditRiskData,
colsToPlot=['liable', 'tele', 'foreign'])
These bar charts represent the frequencies of each category in the Y-axis and the category names in the X-axis.
The ideal bar chart looks like the chart of the "property" column, where each category has a comparable frequency; hence there are enough rows for each category in the data for the ML algorithm to learn from.
If a column shows a very skewed distribution, like "foreign", where there is a single dominant bar and the other categories are present in very low numbers, it may not be very helpful in machine learning. We confirm this in the correlation analysis section and take the final call to select or reject the column.
In this data, all the categorical columns except "foreign" and "others" have satisfactory distribution for machine learning.
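To quantify the doubt about these two columns, we can look at the share of the dominant category; a small sketch:
# Measuring how dominant the largest category is in the doubtful columns
for col in ['foreign', 'others']:
    print(CreditRiskData[col].value_counts(normalize=True), '\n')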
Selected Categorical Variables: All the categorical variables are selected with a doubt on "foreign" and "others".
'checkingstatus', 'history', 'purpose','savings','employ', 'installment', 'status', 'others','residence', 'property', 'otherplans', 'housing', 'cards', 'job', 'liable', 'tele', 'foreign'
Based on the Basic Data Exploration, there are three continuous predictor variables: 'duration', 'amount', and 'age'.
# Plotting histograms of multiple columns together
# Observe the data distribution and skewness of 'age', 'amount' and 'duration'
CreditRiskData.hist(['age', 'amount','duration'], figsize=(18,10))
Histograms show us the data distribution of a single continuous variable.
The X-axis shows the range of values and the Y-axis represents the number of rows in that range. For example, in the above histogram of "age", there are around 260 rows in the data with age between 25 and 30.
The ideal outcome for a histogram is a bell curve or a slightly skewed bell curve. If there is too much skewness, then outlier treatment should be done and the column re-examined; only if that also does not solve the problem should the column be rejected.
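The skewness can also be measured numerically instead of judged only visually; a quick sketch (values near 0 indicate a symmetric bell curve, while an absolute value above roughly 1 is commonly treated as highly skewed):
# Quantifying the skewness of each continuous column
CreditRiskData[['age', 'amount', 'duration']].skew()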
Selected Continuous Variables: 'age', 'amount', and 'duration' are all selected for further analysis.
Outliers are extreme values in the data which are far away from most of the values. You can see them as the tails in the histogram.
Outliers must be treated one column at a time, as the treatment will be slightly different for each column.
Why should outliers be treated?
Outliers bias the training of machine learning models. As the algorithm tries to fit the extreme values, it moves away from the majority of the data.
There are two common options to treat outliers in the data: cap them at a logical maximum/minimum value, or delete the affected rows.
In this data all the continuous variables have slightly skewed distributions, which is acceptable; hence no outlier treatment is required.
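For reference, had treatment been required, a minimal capping sketch could look like the below; the 99th-percentile cutoff is an illustrative assumption, not a rule derived from this data:
# Sketch only: capping 'amount' at its 99th percentile (hypothetical threshold)
upper_cap=CreditRiskData['amount'].quantile(0.99)
amount_capped=np.where(CreditRiskData['amount'] > upper_cap, upper_cap, CreditRiskData['amount'])
# Option 2 is deleting the outlier rows instead:
# CreditRiskData=CreditRiskData[CreditRiskData['amount'] <= upper_cap]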
Missing values are treated for each column separately.
If a column has more than 30% data missing, then missing value treatment cannot be done. That column must be rejected because too much information is missing.
There are a few common options for treating missing values in the data, such as imputing continuous columns with the median, imputing categorical columns with the mode, or deleting the affected rows.
# Finding how many missing values are there for each column
CreditRiskData.isnull().sum()
No missing values in this data!
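For reference, if missing values had been present, a minimal treatment sketch could look like the below; the columns used in the imputation lines are illustrative only:
# Sketch only: percentage of missing values per column
MissingPercent=CreditRiskData.isnull().sum() * 100 / len(CreditRiskData)
# Columns crossing the 30% threshold would be rejected
print(MissingPercent[MissingPercent > 30].index.tolist())
# Remaining continuous columns could be imputed with the median, categorical ones with the mode, e.g.
# CreditRiskData['amount'].fillna(CreditRiskData['amount'].median(), inplace=True)
# CreditRiskData['housing'].fillna(CreditRiskData['housing'].mode()[0], inplace=True)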
Now it's time to finally choose the best columns (features) which are correlated to the Target variable. This can be done directly by measuring the correlation values or with ANOVA/Chi-Square tests. However, it is always helpful to visualize the relation between the Target variable and each of the predictors to get a better sense of the data.
I have listed below the techniques used for visualizing the relationship between two variables, as well as for measuring the strength of that relationship statistically.
In this case study the Target variable is categorical, hence the below two scenarios will be present:
When the target variable is Categorical and the predictor variable is Continuous, we analyze the relation visually using box plots and measure the strength of the relation using the ANOVA test.
# Box plots for Categorical Target Variable "GoodCredit" and continuous predictors
ContinuousColsList=['age', 'amount', 'duration']

import matplotlib.pyplot as plt
fig, PlotCanvas=plt.subplots(nrows=1, ncols=len(ContinuousColsList), figsize=(18,5))

# Creating box plots for each continuous predictor against the Target Variable "GoodCredit"
for PredictorCol, i in zip(ContinuousColsList, range(len(ContinuousColsList))):
    CreditRiskData.boxplot(column=PredictorCol, by='GoodCredit', figsize=(5,5), vert=True, ax=PlotCanvas[i])
What should you look for in these box plots?
These plots give an idea of the distribution of the continuous predictor (Y-axis) for each category of the target (X-axis).
If the distribution looks similar for each category (the boxes are in the same line), it means the continuous variable has NO effect on the target variable; hence, the two variables are not correlated with each other.
For example, look at the first chart, "age" vs "GoodCredit". The boxes are roughly in the same line! It means that people whose loans were rejected and people whose loans were approved have a similar age distribution. Hence, I cannot distinguish between approval and rejection based on the age of an applicant, so this column appears NOT correlated with GoodCredit.
The other two charts exhibit the opposite behaviour: the boxes sit at visibly different levels, hence "amount" and "duration" are correlated with the target variable.
We confirm this by looking at the results of the ANOVA test below.
Analysis of Variance (ANOVA) is performed to check if there is any relationship between a given continuous variable and a categorical variable.
# Defining a function to find the statistical relationship of the continuous predictors with the categorical target
def FunctionAnova(inpData, TargetVariable, ContinuousPredictorList):
    from scipy.stats import f_oneway

    # Creating an empty list of final selected predictors
    SelectedPredictors=[]

    print('##### ANOVA Results ##### \n')
    for predictor in ContinuousPredictorList:
        CategoryGroupLists=inpData.groupby(TargetVariable)[predictor].apply(list)
        AnovaResults = f_oneway(*CategoryGroupLists)

        # If the ANOVA P-Value is < 0.05, we reject H0 (H0: the group means are equal)
        if (AnovaResults[1] < 0.05):
            print(predictor, 'is correlated with', TargetVariable, '| P-Value:', AnovaResults[1])
            SelectedPredictors.append(predictor)
        else:
            print(predictor, 'is NOT correlated with', TargetVariable, '| P-Value:', AnovaResults[1])

    return(SelectedPredictors)
# Calling the function to check which continuous variables are correlated with the target
ContinuousVariables=['age', 'amount','duration']
FunctionAnova(inpData=CreditRiskData, TargetVariable='GoodCredit', ContinuousPredictorList=ContinuousVariables)
The results of ANOVA confirm our visual analysis using box plots above!
Notice the P-Value of "age": it is just at the boundary of the 0.05 threshold. This is something we already doubted in the box plots section.
The other two P-Values are practically zero, hence those columns are correlated beyond doubt.
All three columns are correlated with GoodCredit.
When the target variable is Categorical and the predictor is also Categorical, we explore the correlation between them visually using grouped bar charts and statistically using the Chi-square test.
# Cross tabulation between two categorical variables
CrossTabResult=pd.crosstab(index=CreditRiskData['checkingstatus'], columns=CreditRiskData['GoodCredit'])
CrossTabResult
# Visual Inference using Grouped Bar charts
CategoricalColsList=['checkingstatus', 'history', 'purpose','savings','employ',
                     'installment', 'status', 'others','residence', 'property',
                     'otherplans', 'housing', 'cards', 'job', 'liable', 'tele', 'foreign']

import matplotlib.pyplot as plt
fig, PlotCanvas=plt.subplots(nrows=len(CategoricalColsList), ncols=1, figsize=(10,90))

# Creating Grouped bar plots for each categorical predictor against the Target Variable "GoodCredit"
for CategoricalCol, i in zip(CategoricalColsList, range(len(CategoricalColsList))):
    CrossTabResult=pd.crosstab(index=CreditRiskData[CategoricalCol], columns=CreditRiskData['GoodCredit'])
    CrossTabResult.plot.bar(color=['red','green'], ax=PlotCanvas[i])
What to look for in these grouped bar charts?
These grouped bar charts show the frequency on the Y-axis and the category on the X-axis. If the ratio of the bars is similar across all categories, then the two columns are not correlated. For example, look at the "tele" vs "GoodCredit" plot: the 0-vs-1 ratio for A191 is similar to that for A192, which means "tele" does not affect Good/Bad Credit. Hence, these two variables are not correlated.
On the other hand, look at the "history" vs "GoodCredit" plot: the number of bad credits is very high when history=A32 or A34. It means history affects Good/Bad Credit! Hence, the two columns are correlated with each other.
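The ratios seen in these charts can also be printed as row-wise proportions, which makes the comparison less subjective; a quick sketch for the "history" column:
# Row-wise proportions: similar rows suggest no correlation, clearly different rows suggest correlation
pd.crosstab(index=CreditRiskData['history'], columns=CreditRiskData['GoodCredit'], normalize='index')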
We confirm this analysis in the section below using Chi-Square tests.
The Chi-Square test is conducted to check the correlation between two categorical variables.
# Writing a function to find the correlation of all categorical variables with the Target variable
def FunctionChisq(inpData, TargetVariable, CategoricalVariablesList):
    from scipy.stats import chi2_contingency

    # Creating an empty list of final selected predictors
    SelectedPredictors=[]

    for predictor in CategoricalVariablesList:
        CrossTabResult=pd.crosstab(index=inpData[TargetVariable], columns=inpData[predictor])
        ChiSqResult = chi2_contingency(CrossTabResult)

        # If the Chi-Square P-Value is < 0.05, we reject H0 (H0: the two variables are independent)
        if (ChiSqResult[1] < 0.05):
            print(predictor, 'is correlated with', TargetVariable, '| P-Value:', ChiSqResult[1])
            SelectedPredictors.append(predictor)
        else:
            print(predictor, 'is NOT correlated with', TargetVariable, '| P-Value:', ChiSqResult[1])

    return(SelectedPredictors)
CategoricalVariables=['checkingstatus', 'history', 'purpose','savings','employ',
'installment', 'status', 'others','residence', 'property',
'otherplans', 'housing', 'cards', 'job', 'liable', 'tele', 'foreign']
# Calling the function
FunctionChisq(inpData=CreditRiskData,
TargetVariable='GoodCredit',
CategoricalVariablesList= CategoricalVariables)
Based on the results of Chi-Square test, below categorical columns are selected as predictors for Machine Learning
'checkingstatus', 'history', 'purpose', 'savings', 'employ', 'status', 'others', 'property', 'otherplans', 'housing', 'foreign'
Based on the above tests, we select the final columns for machine learning.
SelectedColumns=['checkingstatus','history','purpose','savings','employ',
'status','others','property','otherplans','housing','foreign',
'age', 'amount', 'duration']
# Selecting final columns
DataForML=CreditRiskData[SelectedColumns]
DataForML.head()
# Saving this final data for reference during deployment
DataForML.to_pickle('DataForML.pkl')
Below is the list of steps performed on the predictor variables before the data can be used for machine learning.
Based on the explanation of column values given on the dataset website
https://archive.ics.uci.edu/ml/datasets/statlog+(german+credit+data)
the "employ" column has ordinal properties: its categories represent increasing years of employment, so they can be mapped to ordered integers.
# Treating the Ordinal variable first
DataForML['employ'].replace({'A71':1, 'A72':2,'A73':3, 'A74':4,'A75':5 }, inplace=True)
# Treating the binary nominal variable
DataForML['foreign'].replace({'A201':1, 'A202':0}, inplace=True)
# Looking at data after nominal treatment
DataForML.head()
# Treating all the nominal variables at once using dummy variables
DataForML_Numeric=pd.get_dummies(DataForML)
# Adding Target Variable to the data
DataForML_Numeric['GoodCredit']=CreditRiskData['GoodCredit']
# Printing sample rows
DataForML_Numeric.head()
We don't use the full data for creating the model. Some data is randomly selected and kept aside for checking how good the model is; this is known as the Testing Data, and the remaining data, on which the model is built, is called the Training Data. Typically 70% of the data is used for training and the remaining 30% for testing.
# Printing all the column names for our reference
DataForML_Numeric.columns
# Separate Target Variable and Predictor Variables
TargetVariable='GoodCredit'
Predictors=['employ', 'foreign', 'age', 'amount', 'duration', 'checkingstatus_A11',
'checkingstatus_A12', 'checkingstatus_A13', 'checkingstatus_A14',
'history_A30', 'history_A31', 'history_A32', 'history_A33',
'history_A34', 'purpose_A40', 'purpose_A41', 'purpose_A410',
'purpose_A42', 'purpose_A43', 'purpose_A44', 'purpose_A45',
'purpose_A46', 'purpose_A48', 'purpose_A49', 'savings_A61',
'savings_A62', 'savings_A63', 'savings_A64', 'savings_A65',
'status_A91', 'status_A92', 'status_A93', 'status_A94', 'others_A101',
'others_A102', 'others_A103', 'property_A121', 'property_A122',
'property_A123', 'property_A124', 'otherplans_A141', 'otherplans_A142',
'otherplans_A143', 'housing_A151', 'housing_A152', 'housing_A153']
X=DataForML_Numeric[Predictors].values
y=DataForML_Numeric[TargetVariable].values
# Split the data into training and testing set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=428)
You can choose not to run this step if you want to compare the resultant accuracy of this transformation with the accuracy of raw data.
However, if you are using KNN or Neural Networks, then this step becomes necessary.
### Standardization/Normalization of data ###
from sklearn.preprocessing import StandardScaler, MinMaxScaler

# Choose either Standardization or MinMax Normalization
# On this data, MinMax Normalization produced better results
#PredictorScaler=StandardScaler()
PredictorScaler=MinMaxScaler()
# Storing the fit object for later reference
PredictorScalerFit=PredictorScaler.fit(X)
# Generating the standardized values of X
X=PredictorScalerFit.transform(X)
# Split the data into training and testing set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Sanity check for the sampled data
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
# Logistic Regression
from sklearn.linear_model import LogisticRegression
# Tunable hyperparameters: the penalty ('l1' or 'l2') and the regularization strength C
# Also try different values for solver: 'newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'
clf = LogisticRegression(C=1,penalty='l2', solver='newton-cg')
# Printing all the parameters of logistic regression
# print(clf)
# Creating the model on Training Data
LOG=clf.fit(X_train,y_train)
prediction=LOG.predict(X_test)
# Measuring accuracy on Testing Data
from sklearn import metrics
print(metrics.classification_report(y_test, prediction))
print(metrics.confusion_matrix(y_test, prediction))
# Printing the Overall Accuracy of the model (measured here as the weighted F1-score)
F1_Score=metrics.f1_score(y_test, prediction, average='weighted')
print('Accuracy of the model on Testing Sample Data:', round(F1_Score,2))
# Importing cross validation function from sklearn
from sklearn.model_selection import cross_val_score
# Running 10-Fold Cross validation on a given algorithm
# Passing full data X and y because the K-fold will split the data and automatically choose train/test
Accuracy_Values=cross_val_score(LOG, X , y, cv=10, scoring='f1_weighted')
print('\nAccuracy values for 10-fold Cross Validation:\n',Accuracy_Values)
print('\nFinal Average Accuracy of the model:', round(Accuracy_Values.mean(),2))
#Decision Trees
from sklearn import tree
# choose from different tunable hyper parameters
# Choose various values of max_depth and criterion for tuning the model
clf = tree.DecisionTreeClassifier(max_depth=4,criterion='gini')
# Printing all the parameters of Decision Trees
print(clf)
# Creating the model on Training Data
DTree=clf.fit(X_train,y_train)
prediction=DTree.predict(X_test)
# Measuring accuracy on Testing Data
from sklearn import metrics
print(metrics.classification_report(y_test, prediction))
print(metrics.confusion_matrix(y_test, prediction))
# Printing the Overall Accuracy of the model
F1_Score=metrics.f1_score(y_test, prediction, average='weighted')
print('Accuracy of the model on Testing Sample Data:', round(F1_Score,2))
# Plotting the feature importance for Top 10 most important columns
%matplotlib inline
feature_importances = pd.Series(DTree.feature_importances_, index=Predictors)
feature_importances.nlargest(10).plot(kind='barh')
# Importing cross validation function from sklearn
from sklearn.model_selection import cross_val_score
# Running 10-Fold Cross validation on a given algorithm
# Passing full data X and y because the K-fold will split the data and automatically choose train/test
Accuracy_Values=cross_val_score(DTree, X , y, cv=10, scoring='f1_weighted')
print('\nAccuracy values for 10-fold Cross Validation:\n',Accuracy_Values)
print('\nFinal Average Accuracy of the model:', round(Accuracy_Values.mean(),2))
# Installing the required library for plotting the decision tree
#!pip install dtreeplt
from dtreeplt import dtreeplt
dtree = dtreeplt(model=clf, feature_names=Predictors, target_names=TargetVariable)
fig = dtree.view()
currentFigure=plt.gcf()
currentFigure.set_size_inches(50,20)
# Double click on the graph to zoom in
# Random Forest (Bagging of multiple Decision Trees)
from sklearn.ensemble import RandomForestClassifier
# Choose various values of max_depth, n_estimators and criterion for tuning the model
clf = RandomForestClassifier(max_depth=10, n_estimators=100,criterion='gini')
# Printing all the parameters of Random Forest
print(clf)
# Creating the model on Training Data
RF=clf.fit(X_train,y_train)
prediction=RF.predict(X_test)
# Measuring accuracy on Testing Data
from sklearn import metrics
print(metrics.classification_report(y_test, prediction))
print(metrics.confusion_matrix(y_test, prediction))
# Printing the Overall Accuracy of the model
F1_Score=metrics.f1_score(y_test, prediction, average='weighted')
print('Accuracy of the model on Testing Sample Data:', round(F1_Score,2))
# Importing cross validation function from sklearn
from sklearn.model_selection import cross_val_score
# Running 10-Fold Cross validation on a given algorithm
# Passing full data X and y because the K-fold will split the data and automatically choose train/test
Accuracy_Values=cross_val_score(RF, X , y, cv=10, scoring='f1_weighted')
print('\nAccuracy values for 10-fold Cross Validation:\n',Accuracy_Values)
print('\nFinal Average Accuracy of the model:', round(Accuracy_Values.mean(),2))
# Plotting the feature importance for Top 10 most important columns
%matplotlib inline
feature_importances = pd.Series(RF.feature_importances_, index=Predictors)
feature_importances.nlargest(10).plot(kind='barh')
# max_depth=10 produces a tree too large to plot here
# Plotting a single Decision Tree from the Random Forest
#from dtreeplt import dtreeplt
#dtree = dtreeplt(model=clf.estimators_[4], feature_names=Predictors, target_names=TargetVariable)
#fig = dtree.view()
#currentFigure=plt.gcf()
#currentFigure.set_size_inches(100,40)
# Double click on the graph to zoom in
# Adaboost
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
# Choosing a Decision Tree of depth 4 as the weak learner
# Choose different values of max_depth, n_estimators and learning_rate to tune the model
DTC=DecisionTreeClassifier(max_depth=4)
# Note: in scikit-learn >= 1.2 the 'base_estimator' parameter is renamed to 'estimator'
clf = AdaBoostClassifier(n_estimators=200, base_estimator=DTC, learning_rate=0.01)
# Printing all the parameters of Adaboost
print(clf)
# Creating the model on Training Data
AB=clf.fit(X_train,y_train)
prediction=AB.predict(X_test)
# Measuring accuracy on Testing Data
from sklearn import metrics
print(metrics.classification_report(y_test, prediction))
print(metrics.confusion_matrix(y_test, prediction))
# Printing the Overall Accuracy of the model
F1_Score=metrics.f1_score(y_test, prediction, average='weighted')
print('Accuracy of the model on Testing Sample Data:', round(F1_Score,2))
# Importing cross validation function from sklearn
from sklearn.model_selection import cross_val_score
# Running 10-Fold Cross validation on a given algorithm
# Passing full data X and y because the K-fold will split the data and automatically choose train/test
Accuracy_Values=cross_val_score(AB, X , y, cv=10, scoring='f1_weighted')
print('\nAccuracy values for 10-fold Cross Validation:\n',Accuracy_Values)
print('\nFinal Average Accuracy of the model:', round(Accuracy_Values.mean(),2))
# Plotting the feature importance for Top 10 most important columns
%matplotlib inline
feature_importances = pd.Series(AB.feature_importances_, index=Predictors)
feature_importances.nlargest(10).plot(kind='barh')
# Plotting the 5th Decision Tree (index 4) from Adaboost
from dtreeplt import dtreeplt
dtree = dtreeplt(model=clf.estimators_[4], feature_names=Predictors, target_names=TargetVariable)
fig = dtree.view()
currentFigure=plt.gcf()
currentFigure.set_size_inches(50,40)
# Double click on the graph to zoom in
# eXtreme Gradient Boosting (XGBoost)
from xgboost import XGBClassifier
clf=XGBClassifier(max_depth=10, learning_rate=0.01, n_estimators=200, objective='binary:logistic', booster='gbtree')
# Printing all the parameters of XGBoost
print(clf)
# Creating the model on Training Data
XGB=clf.fit(X_train,y_train)
prediction=XGB.predict(X_test)
# Measuring accuracy on Testing Data
from sklearn import metrics
print(metrics.classification_report(y_test, prediction))
print(metrics.confusion_matrix(y_test, prediction))
# Printing the Overall Accuracy of the model
F1_Score=metrics.f1_score(y_test, prediction, average='weighted')
print('Accuracy of the model on Testing Sample Data:', round(F1_Score,2))
# Importing cross validation function from sklearn
from sklearn.model_selection import cross_val_score
# Running 10-Fold Cross validation on a given algorithm
# Passing full data X and y because the K-fold will split the data and automatically choose train/test
Accuracy_Values=cross_val_score(XGB, X , y, cv=10, scoring='f1_weighted')
print('\nAccuracy values for 10-fold Cross Validation:\n',Accuracy_Values)
print('\nFinal Average Accuracy of the model:', round(Accuracy_Values.mean(),2))
# Plotting the feature importance for Top 10 most important columns
%matplotlib inline
feature_importances = pd.Series(XGB.feature_importances_, index=Predictors)
feature_importances.nlargest(10).plot(kind='barh')
# max_depth=10 produces trees too large to plot here
#from xgboost import plot_tree
#import matplotlib.pyplot as plt
#fig, ax = plt.subplots(figsize=(100, 40))
#plot_tree(XGB, num_trees=10, ax=ax)
# Double click on the graph to zoom in
# K-Nearest Neighbor(KNN)
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier(n_neighbors=3)
# Printing all the parameters of KNN
print(clf)
# Creating the model on Training Data
KNN=clf.fit(X_train,y_train)
prediction=KNN.predict(X_test)
# Measuring accuracy on Testing Data
from sklearn import metrics
print(metrics.classification_report(y_test, prediction))
print(metrics.confusion_matrix(y_test, prediction))
# Printing the Overall Accuracy of the model
F1_Score=metrics.f1_score(y_test, prediction, average='weighted')
print('Accuracy of the model on Testing Sample Data:', round(F1_Score,2))
# Importing cross validation function from sklearn
from sklearn.model_selection import cross_val_score
# Running 10-Fold Cross validation on a given algorithm
# Passing full data X and y because the K-fold will split the data and automatically choose train/test
Accuracy_Values=cross_val_score(KNN, X , y, cv=10, scoring='f1_weighted')
print('\nAccuracy values for 10-fold Cross Validation:\n',Accuracy_Values)
print('\nFinal Average Accuracy of the model:', round(Accuracy_Values.mean(),2))
# Plotting the feature importance for Top 10 most important columns
# There is no built-in method to get feature importance in KNN
# Support Vector Machines(SVM)
from sklearn import svm
clf = svm.SVC(C=2, kernel='rbf', gamma=0.1)
# Printing all the parameters of SVM
print(clf)
# Creating the model on Training Data
SVM=clf.fit(X_train,y_train)
prediction=SVM.predict(X_test)
# Measuring accuracy on Testing Data
from sklearn import metrics
print(metrics.classification_report(y_test, prediction))
print(metrics.confusion_matrix(y_test, prediction))
# Printing the Overall Accuracy of the model
F1_Score=metrics.f1_score(y_test, prediction, average='weighted')
print('Accuracy of the model on Testing Sample Data:', round(F1_Score,2))
# Importing cross validation function from sklearn
from sklearn.model_selection import cross_val_score
# Running 10-Fold Cross validation on a given algorithm
# Passing full data X and y because the K-fold will split the data and automatically choose train/test
Accuracy_Values=cross_val_score(SVM, X , y, cv=10, scoring='f1_weighted')
print('\nAccuracy values for 10-fold Cross Validation:\n',Accuracy_Values)
print('\nFinal Average Accuracy of the model:', round(Accuracy_Values.mean(),2))
# Plotting the feature importance for Top 10 most important columns
# The built in attribute SVM.coef_ works only for linear kernel
%matplotlib inline
#feature_importances = pd.Series(SVM.coef_[0], index=Predictors)
#feature_importances.nlargest(10).plot(kind='barh')
# Naive Bayes
from sklearn.naive_bayes import GaussianNB, MultinomialNB
# GaussianNB assumes continuous features following a normal distribution
# MultinomialNB is suited for discrete count features; both can handle binary as well as multi-class targets
clf = GaussianNB()
#clf = MultinomialNB()
# Printing all the parameters of Naive Bayes
print(clf)
NB=clf.fit(X_train,y_train)
prediction=NB.predict(X_test)
# Measuring accuracy on Testing Data
from sklearn import metrics
print(metrics.classification_report(y_test, prediction))
print(metrics.confusion_matrix(y_test, prediction))
# Printing the Overall Accuracy of the model
F1_Score=metrics.f1_score(y_test, prediction, average='weighted')
print('Accuracy of the model on Testing Sample Data:', round(F1_Score,2))
# Importing cross validation function from sklearn
from sklearn.model_selection import cross_val_score
# Running 10-Fold Cross validation on a given algorithm
# Passing full data X and y because the K-fold will split the data and automatically choose train/test
Accuracy_Values=cross_val_score(NB, X , y, cv=10, scoring='f1_weighted')
print('\nAccuracy values for 10-fold Cross Validation:\n',Accuracy_Values)
print('\nFinal Average Accuracy of the model:', round(Accuracy_Values.mean(),2))
Based on the above trials, you select the algorithm that produces the best average accuracy. In this case, multiple algorithms have produced a similar average accuracy, hence we can choose any one of them.
I am choosing SVM as the final model since it is very fast on this high-dimensional data.
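For reference, a minimal sketch to line up all the candidates side by side, reusing the model objects fitted above (cross_val_score refits internal clones, so the fitted models are not disturbed):
# Comparing the average 10-fold cross validated weighted F1 of all candidate models
from sklearn.model_selection import cross_val_score
ModelsToCompare={'Logistic':LOG, 'DecisionTree':DTree, 'RandomForest':RF,
                 'AdaBoost':AB, 'XGBoost':XGB, 'KNN':KNN, 'SVM':SVM, 'NaiveBayes':NB}
for ModelName, Model in ModelsToCompare.items():
    Scores=cross_val_score(Model, X, y, cv=10, scoring='f1_weighted')
    print(ModelName, ':', round(Scores.mean(), 2))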
In order to deploy the model we follow the steps below.
It is beneficial to keep a smaller number of predictors for the model when deploying it in production. The fewer predictors you keep, the better, because the model has fewer dependencies and is hence more stable.
This is especially important when the data is high dimensional (too many predictor columns).
In this data, the most important predictor variables are 'employ', 'age', 'amount', 'duration', 'checkingstatus', 'history', 'purpose', 'savings', and 'status', as these are consistently at the top of the variable importance charts for every algorithm. Hence we choose these as the final set of predictor variables.
# Separate Target Variable and Predictor Variables
TargetVariable='GoodCredit'
# Selecting the final set of predictors for the deployment
# Based on the variable importance charts of multiple algorithms above
Predictors=['employ', 'age', 'amount', 'duration','checkingstatus_A11',
'checkingstatus_A12', 'checkingstatus_A13', 'checkingstatus_A14',
'history_A30', 'history_A31', 'history_A32', 'history_A33',
'history_A34', 'purpose_A40', 'purpose_A41', 'purpose_A410',
'purpose_A42', 'purpose_A43', 'purpose_A44', 'purpose_A45',
'purpose_A46', 'purpose_A48', 'purpose_A49','savings_A61',
'savings_A62', 'savings_A63', 'savings_A64', 'savings_A65',
'status_A91', 'status_A92', 'status_A93', 'status_A94']
X=DataForML_Numeric[Predictors].values
y=DataForML_Numeric[TargetVariable].values
### Standardization/Normalization of data ###
from sklearn.preprocessing import StandardScaler, MinMaxScaler

# Choose either Standardization or MinMax Normalization
# On this data, MinMax Normalization produced better results
#PredictorScaler=StandardScaler()
PredictorScaler=MinMaxScaler()
# Storing the fit object for later reference
PredictorScalerFit=PredictorScaler.fit(X)
# Generating the standardized values of X
X=PredictorScalerFit.transform(X)
print(X.shape)
print(y.shape)
# Using the SVM algorithm with the final hyperparameters
from sklearn import svm
clf = svm.SVC(C=4, kernel='rbf', gamma=0.1)
# Training the model on 100% Data available
Final_SVM_Model=clf.fit(X,y)
# Importing cross validation function from sklearn
from sklearn.model_selection import cross_val_score
# Running 10-Fold Cross validation on a given algorithm
# Passing full data X and y because the K-fold will split the data and automatically choose train/test
Accuracy_Values=cross_val_score(Final_SVM_Model, X , y, cv=10, scoring='f1_weighted')
print('\nAccuracy values for 10-fold Cross Validation:\n',Accuracy_Values)
print('\nFinal Average Accuracy of the model:', round(Accuracy_Values.mean(),2))
import pickle
import os

# Python objects can be saved as serialized files using the pickle library
# Here we save the Final_SVM_Model
with open('Final_SVM_Model.pkl', 'wb') as fileWriteStream:
    pickle.dump(Final_SVM_Model, fileWriteStream)
    # The 'with' block closes the filestream automatically

print('pickle file of Predictive Model is saved at Location:', os.getcwd())
# This function can be called from any front end tool/website
def PredictLoanStatus(InputLoanDetails):
    import pandas as pd
    Num_Inputs=InputLoanDetails.shape[0]

    # Making sure the input data has the same columns as were used for training the model
    # Also, since normalization was done during training, the same must be done for the new input
    # Appending the new data to the training data
    DataForML=pd.read_pickle('DataForML.pkl')
    # Note: DataFrame.append() was removed in pandas 2.x; pd.concat() does the same job
    InputLoanDetails=pd.concat([InputLoanDetails, DataForML], ignore_index=True)

    # Treating the Ordinal variable first
    InputLoanDetails['employ'].replace({'A71':1, 'A72':2, 'A73':3, 'A74':4, 'A75':5}, inplace=True)

    # Generating dummy variables for the rest of the nominal variables
    InputLoanDetails=pd.get_dummies(InputLoanDetails)

    # Maintaining the same order of columns as during model training
    Predictors=['employ', 'age', 'amount', 'duration','checkingstatus_A11',
                'checkingstatus_A12', 'checkingstatus_A13', 'checkingstatus_A14',
                'history_A30', 'history_A31', 'history_A32', 'history_A33',
                'history_A34', 'purpose_A40', 'purpose_A41', 'purpose_A410',
                'purpose_A42', 'purpose_A43', 'purpose_A44', 'purpose_A45',
                'purpose_A46', 'purpose_A48', 'purpose_A49', 'savings_A61',
                'savings_A62', 'savings_A63', 'savings_A64', 'savings_A65',
                'status_A91', 'status_A92', 'status_A93', 'status_A94']

    # Generating the input values to the model (only the new rows at the top)
    X=InputLoanDetails[Predictors].values[0:Num_Inputs]

    # Generating the normalized values of X, since this was also done during model training
    X=PredictorScalerFit.transform(X)

    # Loading the model from the pickle file
    import pickle
    with open('Final_SVM_Model.pkl', 'rb') as fileReadStream:
        Final_SVM_Model=pickle.load(fileReadStream)

    # Generating predictions
    Prediction=Final_SVM_Model.predict(X)
    PredictedStatus=pd.DataFrame(Prediction, columns=['Predicted Status'])
    return(PredictedStatus)
# Calling the function for some loan applications manually
NewLoanApplications=pd.DataFrame(
data=[['A73',22,5951,48,'A12','A32','A43','A61','A92'],
['A72',40,8951,24,'A12','A32','A43','A61','A92']],
columns=['employ', 'age', 'amount', 'duration','checkingstatus',
'history', 'purpose', 'savings','status'])
print(NewLoanApplications)
# Calling the Function for prediction
PredictLoanStatus(InputLoanDetails= NewLoanApplications)
The function PredictLoanStatus can be used to produce predictions for one or more loan applications at a time. Hence, it can be scheduled as a batch job or cron job to run every night and generate predictions for all the loan applications available in the system.
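A thin wrapper for such a batch job could look like the sketch below; the file names are assumptions for illustration:
# Sketch only: scoring all pending applications from a file in one batch run
def ScoreApplicationsBatch(input_csv, output_csv):
    # Hypothetical input file containing the same raw predictor columns as NewLoanApplications above
    PendingApplications=pd.read_csv(input_csv)
    Results=PredictLoanStatus(InputLoanDetails=PendingApplications)
    Results.to_csv(output_csv, index=False)
    print('Predictions written to', output_csv)

# Example call (file names are hypothetical):
# ScoreApplicationsBatch('PendingApplications.csv', 'LoanPredictions.csv')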
# Creating the function which can take loan inputs and perform prediction
def FunctionLoanPrediction(inp_employ, inp_age, inp_amount, inp_duration,
                           inp_checkingstatus, inp_history, inp_purpose,
                           inp_savings, inp_status):
    SampleInputData=pd.DataFrame(
        data=[[inp_employ, inp_age, inp_amount, inp_duration,
               inp_checkingstatus, inp_history, inp_purpose, inp_savings, inp_status]],
        columns=['employ', 'age', 'amount', 'duration','checkingstatus',
                 'history', 'purpose', 'savings','status'])

    # Calling the function defined above using the input parameters
    Predictions=PredictLoanStatus(InputLoanDetails=SampleInputData)

    # Returning the predicted loan status as JSON
    return(Predictions.to_json())
# Function call
FunctionLoanPrediction(inp_employ='A73',
inp_age= 22,
inp_amount=5951,
inp_duration=48,
inp_checkingstatus='A12',
inp_history='A32',
inp_purpose='A43',
inp_savings='A61',
inp_status='A92')
# Installing the flask library required to create the API
#!pip install flask

from flask import Flask, request, jsonify
import pickle
import pandas as pd
import numpy

app = Flask(__name__)

@app.route('/get_loan_prediction', methods=["GET"])
def get_loan_prediction():
    try:
        # Getting the parameters from the API call
        employ_value = request.args.get('employ')
        age_value = float(request.args.get('age'))
        amount_value = float(request.args.get('amount'))
        duration_value = float(request.args.get('duration'))
        checkingstatus_value = request.args.get('checkingstatus')
        history_value = request.args.get('history')
        purpose_value = request.args.get('purpose')
        savings_value = request.args.get('savings')
        status_value = request.args.get('status')

        # Calling the function to get the loan approval status
        prediction_from_api = FunctionLoanPrediction(
                                   inp_employ=employ_value,
                                   inp_age=age_value,
                                   inp_amount=amount_value,
                                   inp_duration=duration_value,
                                   inp_checkingstatus=checkingstatus_value,
                                   inp_history=history_value,
                                   inp_purpose=purpose_value,
                                   inp_savings=savings_value,
                                   inp_status=status_value)
        return (prediction_from_api)

    except Exception as e:
        return('Something is not right!: '+str(e))

import os
if __name__ == "__main__":
    # Hosting the API on localhost
    app.run(host='127.0.0.1', port=8080, threaded=True, debug=True, use_reloader=False)

# Interrupt the kernel to stop the API
This URL can be called by any front-end application like Java, Tableau, etc. Once the parameters are passed to it, the predictions will be generated.
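For example, while the API is running, a GET request like the sketch below should return the prediction; the host and port match the app.run() call above:
# Sketch only: calling the hosted API from a Python client using the requests library
import requests
api_url='http://127.0.0.1:8080/get_loan_prediction'
params={'employ':'A73', 'age':22, 'amount':5951, 'duration':48,
        'checkingstatus':'A12', 'history':'A32', 'purpose':'A43',
        'savings':'A61', 'status':'A92'}
response=requests.get(api_url, params=params)
print(response.text)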